In this paper, we present the Multi-Forgery Detection Challenge held concurrently with the IEEE Computer Society Workshop on Media Forensics at CVPR 2022. Our Multi-Forgery Detection Challenge aims to detect automatic image manipulations, including but not limited to image editing, image synthesis, image generation, image Photoshopping, etc. Our challenge attracted 674 teams from all over the world, with about 2,000 valid result submissions. We invited the top ten teams to present their solutions to the challenge, among which three teams were awarded prizes in the grand finale. In this paper, we present the solutions from the top three teams, in order to boost research in the field of image forgery detection.
We examine the ability of machine learning (ML) and deep learning (DL) algorithms to infer surface/ground exchange flux based on subsurface temperature observations. The observations and fluxes are generated by a high-resolution numerical model representing conditions in the Columbia River near the Department of Energy's Hanford Site in southeastern Washington State. Random measurement error of varying magnitude is added to the synthetic temperature observations. The results indicate that both ML and DL methods can be used to infer surface/ground exchange flux. DL methods, particularly convolutional neural networks, performed better when interpreting noisy temperature data with an applied smoothing filter. However, the ML methods also performed well, and they are better at identifying a reduced number of important observations, which is also useful for measurement network optimization. Surprisingly, both ML and DL methods inferred upward fluxes better than downward fluxes. This is in direct contrast to previous findings using numerical models to infer flux from temperature observations, and it may indicate that combining ML- or DL-based inference with numerical inference could improve flux estimation beneath river systems.
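To make the setup concrete, here is a minimal sketch, assuming a PyTorch-style 1D convolutional regressor and a simple moving-average smoother; the layer sizes, sensor count, and window length are illustrative assumptions rather than the study's actual architecture.

```python
# A minimal sketch (not the authors' code) of the kind of convolutional
# regressor the abstract describes: mapping noisy subsurface temperature
# time series to a scalar surface/ground exchange flux.
import torch
import torch.nn as nn

class FluxCNN(nn.Module):
    def __init__(self, n_sensors: int = 4, window: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_sensors, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)  # scalar exchange flux

    def forward(self, temps: torch.Tensor) -> torch.Tensor:
        # temps: (batch, n_sensors, window) temperature observations
        return self.head(self.encoder(temps).squeeze(-1))

# Smoothing the noisy observations (which the abstract suggests helps
# the CNN) can be as simple as a moving average along the time axis.
def smooth(temps: torch.Tensor, k: int = 5) -> torch.Tensor:
    kernel = torch.ones(temps.shape[1], 1, k) / k
    return nn.functional.conv1d(temps, kernel, padding=k // 2,
                                groups=temps.shape[1])

model = FluxCNN()
x = torch.randn(8, 4, 64)   # synthetic noisy observations
flux = model(smooth(x))     # (8, 1) inferred fluxes
```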
We propose LookinGood^{\pi}, a novel neural re-rendering approach that aims to (1) improve the rendering quality of low-quality reconstructed results from human performance capture systems in real time, and (2) improve the generalization ability of the neural re-rendering network to unseen people. Our key idea is to utilize the rendered image of the reconstructed geometry as a guidance to assist the prediction of person-specific details from a few reference images, thus enhancing the re-rendered result. In light of this, we design a two-branch network: a coarse branch is designed to fix some artifacts (i.e., holes, noise) and obtain a coarse version of the rendered input, while a detail branch is designed to predict "correct" details from the warped references. The guidance of the rendered image is realized by effectively blending features from the two branches in the training of the detail branch, which improves both the warping accuracy and the fidelity of the details. We demonstrate that our method outperforms state-of-the-art methods at producing high-fidelity images on unseen people.
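A schematic sketch of the two-branch idea is shown below; the module names, feature widths, and blending scheme are assumptions for illustration, not the paper's released implementation.

```python
# A schematic sketch of the two-branch design described above: a coarse
# branch repairs the rendered input, and a detail branch predicts
# details from warped references, with coarse features blended in.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

class TwoBranchRenderer(nn.Module):
    def __init__(self, feat: int = 32):
        super().__init__()
        self.coarse_enc = conv_block(3, feat)    # encodes rendered geometry
        self.coarse_dec = nn.Conv2d(feat, 3, 3, padding=1)
        self.detail_enc = conv_block(3, feat)    # encodes warped references
        self.blend = conv_block(2 * feat, feat)  # mixes the two branches
        self.detail_dec = nn.Conv2d(feat, 3, 3, padding=1)

    def forward(self, rendered, warped_ref):
        f_coarse = self.coarse_enc(rendered)
        coarse_img = self.coarse_dec(f_coarse)       # holes/noise repaired
        f_detail = self.detail_enc(warped_ref)
        f_mixed = self.blend(torch.cat([f_coarse, f_detail], dim=1))
        return coarse_img, self.detail_dec(f_mixed)  # final re-rendering

model = TwoBranchRenderer()
rendered = torch.randn(1, 3, 128, 128)   # render of reconstructed geometry
reference = torch.randn(1, 3, 128, 128)  # warped reference image
coarse, fine = model(rendered, reference)
```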
Ultra-fine entity typing (UFET) predicts extremely free-formed types (e.g., president, politician) of a given entity mention (e.g., Joe Biden) in context. State-of-the-art (SOTA) methods use the cross-encoder (CE) based architecture. CE concatenates the mention (and its context) with each type and feeds the pairs into a pretrained language model (PLM) to score their relevance. It brings deeper interaction between mention and types to reach better performance but has to perform N (type set size) forward passes to infer types of a single mention. CE is therefore very slow in inference when the type set is large (e.g., N = 10k for UFET). To this end, we propose to perform entity typing in a recall-expand-filter manner. The recall and expand stages prune the large type set and generate K (K is typically less than 256) most relevant type candidates for each mention. At the filter stage, we use a novel model called MCCE to concurrently encode and score these K candidates in only one forward pass to obtain the final type prediction. We investigate different variants of MCCE and extensive experiments show that MCCE under our paradigm reaches SOTA performance on ultra-fine entity typing and is thousands of times faster than the cross-encoder. We also found MCCE is very effective in fine-grained (130 types) and coarse-grained (9 types) entity typing. Our code is available at \url{https://github.com/modelscope/AdaSeq/tree/master/examples/MCCE}.
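The key efficiency idea, as we read the abstract, is that the K candidates share a single encoder pass. A toy sketch of that concurrent scoring pattern follows; the tokenization, model sizes, and candidate representation (one token per type) are simplifying assumptions, not MCCE's actual design.

```python
# A toy sketch of the "concurrent" scoring idea: the mention context and
# all K candidate types share ONE forward pass, and each candidate slot
# is scored from its own hidden state.
import torch
import torch.nn as nn

class ConcurrentScorer(nn.Module):
    def __init__(self, vocab: int = 30000, dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.score = nn.Linear(dim, 1)

    def forward(self, mention_ids, candidate_ids):
        # mention_ids: (B, Lm) context tokens; candidate_ids: (B, K),
        # one token per candidate type, appended after the context.
        x = self.embed(torch.cat([mention_ids, candidate_ids], dim=1))
        h = self.encoder(x)                      # single forward pass
        cand_h = h[:, mention_ids.shape[1]:, :]  # states of the K candidates
        return self.score(cand_h).squeeze(-1)    # (B, K) relevance scores

scorer = ConcurrentScorer()
logits = scorer(torch.randint(0, 30000, (2, 32)),   # mention context
                torch.randint(0, 30000, (2, 128)))  # K = 128 candidates
types = (logits.sigmoid() > 0.5)                    # multi-label decision
```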
Prior works on Information Extraction (IE) typically predict different tasks and instances (e.g., event triggers, entities, roles, relations) independently, while neglecting their interactions and leading to model inefficiency. In this work, we introduce a joint IE framework, HighIE, that learns and predicts multiple IE tasks by integrating high-order cross-task and cross-instance dependencies. Specifically, we design two categories of high-order factors: homogeneous factors and heterogeneous factors. Then, these factors are utilized to jointly predict labels of all instances. To address the intractability problem of exact high-order inference, we incorporate a high-order neural decoder that is unfolded from a mean-field variational inference method. The experimental results show that our approach achieves consistent improvements on three IE tasks compared with our baseline and prior work.
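Unfolding mean-field variational inference into a differentiable decoder is a standard construction; the sketch below shows the general pattern with generic cross-instance factors. The factor shapes and update rule here are textbook mean-field, not HighIE's exact factor design.

```python
# An illustrative sketch of unfolding mean-field inference into a
# differentiable decoder over jointly predicted instances.
import torch

def mean_field(unary, pairwise, n_iters: int = 3):
    """unary: (N, C) scores per instance; pairwise: (N, N, C, C)
    cross-instance scores. Returns approximate marginals q: (N, C)."""
    q = unary.softmax(-1)
    for _ in range(n_iters):
        # message to instance i, label a: sum_j,b pairwise[i,j,a,b] * q[j,b]
        msg = torch.einsum('ijab,jb->ia', pairwise, q)
        q = (unary + msg).softmax(-1)  # each update stays differentiable
    return q

N, C = 5, 4            # 5 instances, 4 labels
q = mean_field(torch.randn(N, C), torch.randn(N, N, C, C))
labels = q.argmax(-1)  # joint prediction over all instances
```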
Multi-modal named entity recognition (NER) and relation extraction (RE) aim to leverage relevant image information to improve the performance of NER and RE. Most existing efforts largely focused on directly extracting potentially useful information from images (such as pixel-level features, identified objects, and associated captions). However, such extraction processes may not be knowledge aware, resulting in information that may not be highly relevant. In this paper, we propose a novel Multi-modal Retrieval based framework (MoRe). MoRe contains a text retrieval module and an image-based retrieval module, which retrieve related knowledge of the input text and image in the knowledge corpus respectively. Next, the retrieval results are sent to the textual and visual models respectively for predictions. Finally, a Mixture of Experts (MoE) module combines the predictions from the two models to make the final decision. Our experiments show that both our textual model and visual model can achieve state-of-the-art performance on four multi-modal NER datasets and one multi-modal RE dataset. With MoE, the model performance can be further improved and our analysis demonstrates the benefits of integrating both textual and visual cues for such tasks.
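The final fusion step can be pictured as a small gating network over the two experts' label distributions. The following is a minimal sketch under that assumption; MoRe's retrieval modules are omitted and the gating design is illustrative.

```python
# A minimal sketch of the Mixture-of-Experts combination step: a gate
# weighs the textual and visual experts' label distributions per token.
import torch
import torch.nn as nn

class MoEFusion(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.gate = nn.Linear(hidden, 2)  # one weight per expert

    def forward(self, h, p_text, p_visual):
        # h: (B, L, hidden) shared token features; p_*: (B, L, C) label
        # distributions from the two single-modality models.
        w = self.gate(h).softmax(-1)                   # (B, L, 2)
        return w[..., :1] * p_text + w[..., 1:] * p_visual

fuse = MoEFusion()
h = torch.randn(2, 16, 256)
p = fuse(h, torch.rand(2, 16, 9).softmax(-1),
            torch.rand(2, 16, 9).softmax(-1))  # final combined decision
```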
Ultra-fine entity typing (UFET) aims to predict a wide range of type phrases that correctly describe the categories of a given entity mention in a sentence. Most recent works infer each entity type independently, ignoring the correlations between types, e.g., when an entity is inferred as a president, it should also be a politician and a leader. To this end, we use an undirected graphical model called pairwise conditional random field (PCRF) to formulate the UFET problem, in which the type variables are not only unarily influenced by the input but also pairwisely relate to all the other type variables. We use various modern backbones for entity typing to compute unary potentials, and derive pairwise potentials from type phrase representations that both capture prior semantic information and facilitate accelerated inference. We use mean-field variational inference for efficient type inference on very large type sets and unfold it as a neural network module to enable end-to-end training. Experiments on UFET show that the Neural-PCRF consistently outperforms its backbones with little cost and results in a competitive performance against cross-encoder based SOTA while being thousands of times faster. We also find Neural-PCRF effective on a widely used fine-grained entity typing dataset with a smaller type set. We pack Neural-PCRF as a network module that can be plugged onto multi-label type classifiers with ease and release it at https://github.com/modelscope/adaseq/tree/master/examples/NPCRF.
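As a rough illustration of the inference machinery, the sketch below unfolds the standard mean-field update for a pairwise model over binary type variables; in the actual module, the pairwise potentials are derived from type-phrase representations rather than drawn at random.

```python
# A compact sketch of the mean-field update that a pairwise CRF over
# binary type variables unfolds into (my paraphrase of the standard
# update, not the released Neural-PCRF module).
import torch

def pcrf_mean_field(unary, pairwise, n_iters: int = 4):
    """unary: (T,) logit that each of T types is on; pairwise: (T, T)
    compatibility between co-occurring types. Returns q: (T,), the
    approximate marginal probability of each type."""
    q = unary.sigmoid()
    for _ in range(n_iters):
        q = (unary + pairwise @ q).sigmoid()  # messages from other types
    return q

T = 1000                            # a (scaled-down) large type set
pairwise = 0.01 * torch.randn(T, T)
pairwise.fill_diagonal_(0)          # no self-message
q = pcrf_mean_field(torch.randn(T), pairwise)
predicted = (q > 0.5).nonzero().flatten()
```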
Learning accurate object detectors often requires large-scale training data with precise object bounding boxes. However, labeling such data is expensive and time-consuming. Since crowd-sourced labeling processes and the ambiguity of objects may produce noisy bounding box annotations, object detectors suffer from such degenerated training data. In this work, we aim to address the challenge of learning robust object detectors with inaccurate bounding boxes. Inspired by the fact that localization precision suffers considerably from inaccurate boxes while classification accuracy is much less affected, we propose to leverage classification as a guidance signal for refining localization results. Specifically, by treating an object as a bag of instances, we introduce an Object-Aware Multiple Instance Learning approach (OA-MIL), featuring object-aware instance selection and object-aware instance extension. The former aims to select accurate training instances instead of directly using the inaccurate box annotations, while the latter focuses on generating high-quality instances for selection. Extensive experiments on synthetic noisy datasets (i.e., noisy PASCAL VOC and MS-COCO) and a real noisy wheat head dataset demonstrate the effectiveness of our OA-MIL. Code is available at https://github.com/cxliu0/oa-mil.
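A toy rendition of the instance-selection step is sketched below; the bag construction by box jittering and the confidence-based selection rule are simplified assumptions meant only to convey how classification can guide localization.

```python
# A toy sketch of object-aware instance selection: each noisy box is
# expanded into a bag of jittered candidate boxes, and the candidate the
# classifier is most confident about becomes the training target.
import torch

def jitter(box: torch.Tensor, n: int = 8, scale: float = 0.1):
    """box: (4,) [x1, y1, x2, y2]. Returns a bag of (n, 4) candidates."""
    wh = (box[2:] - box[:2]).repeat(2)
    return box + scale * wh * torch.randn(n, 4)

def select_instance(bag: torch.Tensor, cls_scores: torch.Tensor):
    """cls_scores: (n,) classifier confidence for the object's class;
    classification guides which box is used for training."""
    return bag[cls_scores.argmax()]

noisy_box = torch.tensor([10., 20., 110., 220.])
bag = jitter(noisy_box)  # the object as a bag of instances
refined = select_instance(bag, torch.rand(bag.shape[0]))
```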
The MultiCoNER shared task aims at detecting semantically ambiguous and complex named entities in short and low-context settings across multiple languages. The lack of context makes the recognition of ambiguous named entities challenging. To alleviate this issue, our team DAMO-NLP proposes a knowledge-based system, in which we build a multilingual knowledge base based on Wikipedia to provide related context information to the named entity recognition (NER) model. Given an input sentence, our system effectively retrieves related contexts from the knowledge base. The original input sentence is then augmented with such context information, allowing significantly better contextualized token representations to be captured. Our system won 10 out of the 13 tracks in the MultiCoNER shared task.
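The augmentation step can be sketched as follows, with a stand-in word-overlap retriever in place of the actual Wikipedia-based multilingual knowledge base; the `[SEP]` concatenation format is an assumption.

```python
# A minimal sketch of retrieval augmentation for NER: retrieved context
# is appended to the input so the encoder can build better token
# representations. The retriever here is a toy stand-in.
def retrieve(sentence: str, kb: dict, k: int = 2) -> list:
    """Toy retriever: rank KB entries by word overlap with the input."""
    words = set(sentence.lower().split())
    scored = sorted(kb.items(),
                    key=lambda kv: -len(words & set(kv[1].lower().split())))
    return [text for _, text in scored[:k]]

def augment(sentence: str, kb: dict) -> str:
    # Only the tokens of the original sentence are labeled downstream;
    # the appended context just improves their representations.
    return sentence + " [SEP] " + " ".join(retrieve(sentence, kb))

kb = {"Lincoln": "Abraham Lincoln was the 16th US president.",
      "Lincoln (film)": "Lincoln is a 2012 historical drama film."}
print(augment("lincoln's speech moved the crowd", kb))
```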
Constructing precise micro-nano metal patterns on complex three-dimensional (3D) plastic parts enables the fabrication of functional devices for advanced applications. However, such patterning is currently expensive and requires complex processes with long fabrication times. This work demonstrates a method for fabricating micro-nano 3D metal-plastic composite structures with arbitrarily complex shapes. In this approach, a photocurable resin is modified to prepare an active precursor that allows subsequent electroless plating (ELP). A multi-material digital light processing 3D printer was newly developed to enable the fabrication of parts containing regions made of standard resin or active precursor resin nested within each other. Selective 3D ELP processing of these parts yields a variety of metal-plastic composite parts with complex hollow micro-nano structures in specific topological relationships, with feature sizes down to 40 μm. Using this technique, 3D metal topologies that cannot be manufactured by traditional methods become possible, and metal patterns can be created inside plastic parts as a means of further miniaturizing electronic devices. The proposed method can also produce metal coatings with improved adhesion to the plastic substrate. Based on this technique, several sensors composed of different functional non-metallic materials and specific metal patterns were designed and fabricated. The results demonstrate the feasibility of the proposed method and suggest potential applications in smart 3D micro-nano electronics, 3D wearable devices, micro/nano sensors, and healthcare.